
    Towards Data-Driven Autonomics in Data Centers

    Continued reliance on human operators for managing data centers is a major impediment to their ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using generated data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating a predictive model for node failures. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing machine state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers, each trained on these features, to predict whether machines will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88%, with precision varying between 50% and 72%. We discuss the practicality of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). All of the scripts used for BigQuery and classification analyses are publicly available from the authors' website. Comment: 12 pages, 6 figures.
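
    The ensemble approach lends itself to a short sketch. The following is a minimal illustration, not the authors' published scripts: it assumes scikit-learn, uses synthetic data in place of the BigQuery-derived features, and picks the decision threshold that keeps the false positive rate at or below 5%.

        # Sketch only: an ensemble of Random Forests whose averaged scores are
        # thresholded to cap the false positive rate at 5%. X and y are synthetic
        # stand-ins for the paper's BigQuery-derived machine-state features.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_curve
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=5000, n_features=20,
                                   weights=[0.95], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Train several forests on bootstrap resamples and average their scores.
        rng = np.random.default_rng(0)
        forests = []
        for seed in range(5):
            idx = rng.choice(len(X_tr), size=len(X_tr), replace=True)
            rf = RandomForestClassifier(n_estimators=100, random_state=seed)
            forests.append(rf.fit(X_tr[idx], y_tr[idx]))

        scores = np.mean([rf.predict_proba(X_te)[:, 1] for rf in forests], axis=0)

        # Choose the highest threshold whose false positive rate stays below 5%.
        fpr, tpr, thr = roc_curve(y_te, scores)
        ok = fpr <= 0.05
        print(f"TPR at FPR<=5%: {tpr[ok][-1]:.2f} (threshold {thr[ok][-1]:.2f})")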

    Towards Operator-less Data Centers Through Data-Driven, Predictive, Proactive Autonomics

    Continued reliance on human operators for managing data centers is a major impediment to their ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using live data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating predictive models for node failures. Our results support the practicality of a data-driven approach by showing the effectiveness of predictive models based on data found in typical data center logs. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing node state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers, each trained on these features, to predict whether nodes will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88%, with precision varying between 50% and 72%. This level of performance allows us to recover a large fraction of jobs' executions (by redirecting them to other nodes when a failure of the present node is predicted) that would otherwise have been wasted due to failures. [...]
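
    The recovery claim can be made concrete with a small back-of-the-envelope calculation. The function below is purely illustrative; the rates used are placeholders within the ranges reported in the abstract, not the paper's measurements.

        # Illustrative arithmetic: net compute hours saved by redirecting jobs
        # away from nodes predicted to fail. All numbers are hypothetical.
        def recovered_work(n_failing, tpr, n_healthy, fpr,
                           work_per_failure, migration_cost):
            saved = n_failing * tpr * work_per_failure   # wasted work avoided
            wasted = n_healthy * fpr * migration_cost    # needless migrations
            return saved - wasted

        # 100 failing nodes, 10,000 healthy nodes, TPR 60%, FPR capped at 5%,
        # 24 h of work lost per unpredicted failure, 0.5 h per migration.
        print(recovered_work(100, 0.60, 10_000, 0.05, 24.0, 0.5))  # -> 1190.0 h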

    T-MAN: gossip-based overlay topology management

    The aim of this special project is to present the genre of all-ages books and to offer reading suggestions to the interested reader. Briefly defined, the term all-ages books denotes books that can be read with equal enjoyment by children and young people as well as by adult readers. The project opens with excerpts from interviews I conducted with professionals in the book world. These are followed by some forty annotations that I wrote after reading these all-ages books. The selection of books was made on the recommendations of the people mentioned above. Finally, there is a list of non-annotated all-ages literature, chosen according to the same principles as the other works.

    Predicting system-level power for a hybrid supercomputer

    For current High Performance Computing systems to scale towards the holy grail of ExaFLOP performance, their power consumption has to be reduced by at least one order of magnitude. This goal can be achieved only through a combination of hardware and software advances. Being able to model and accurately predict the power consumption of large computational systems is necessary for software-level innovations such as proactive and power-aware scheduling, resource allocation and fault tolerance techniques. In this paper we present a 2-layer model of power consumption for a hybrid supercomputer (which held the top spot of the Green500 list in July 2013) that combines CPU, GPU and MIC technologies to achieve higher energy efficiency. Our model takes as input workload information (the number and location of resources that are used by each job at a certain time) and calculates the resulting system-level power consumption. When jobs are submitted to the system, the workload configuration can be foreseen based on the scheduler policies, and our model can then be applied to predict the ensuing system-level power consumption. Additionally, alternative workload configurations can be evaluated from a power perspective and more efficient ones can be selected. Applications of the model include not only power-aware scheduling but also prediction of anomalous behavior.
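
    The shape of such a model can be sketched briefly. The two layers and the coefficients below are assumptions for illustration, not the paper's fitted values: a first layer maps each node's per-device utilization to node power, and a second layer sums node estimates into a system-level figure.

        # Hypothetical 2-layer power model: layer 1 predicts per-node power from
        # device utilization; layer 2 aggregates node estimates to system level.
        IDLE_W = {"cpu": 50.0, "gpu": 25.0, "mic": 20.0}      # idle draw (W)
        ACTIVE_W = {"cpu": 95.0, "gpu": 225.0, "mic": 245.0}  # extra draw at full load

        def node_power(usage):
            """Layer 1: usage maps device type -> utilization in [0, 1]."""
            return sum(IDLE_W[d] + ACTIVE_W[d] * u for d, u in usage.items())

        def system_power(workload):
            """Layer 2: workload maps node id -> that node's device utilization."""
            return sum(node_power(u) for u in workload.values())

        workload = {
            "node01": {"cpu": 0.8, "gpu": 0.9},  # GPU-heavy job
            "node02": {"cpu": 0.4, "mic": 1.0},  # MIC-accelerated job
            "node03": {"cpu": 0.1},              # nearly idle node
        }
        print(f"predicted system power: {system_power(workload):.0f} W")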

    A Holistic Approach to Log Data Analysis in High-Performance Computing Systems: The Case of IBM Blue Gene/Q

    The complexity and cost of managing high-performance computing infrastructures are on the rise. Automating management and repair through predictive models to minimize human intervention is an attempt to increase system availability and contain these costs. Building predictive models that are accurate enough to be useful in automatic management cannot be based on restricted log data from subsystems but requires a holistic approach to data analysis from disparate sources. Here we provide a detailed multi-scale characterization study based on four datasets reporting power consumption, temperature, workload, and hardware/software events for an IBM Blue Gene/Q installation. We show that the system runs a rich parallel workload, with low correlation among its components in terms of temperature and power, but higher correlation in terms of events. As expected, power and temperature correlate strongly, while events display negative correlations with load and power. Power and workload show moderate correlations, and only at the scale of components. The aim of the study is a systematic, integrated characterization of the computing infrastructure and the discovery of correlation sources and levels to serve as a basis for future predictive modeling efforts.
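
    The kind of multi-scale correlation analysis described above can be sketched as follows, using pandas on synthetic stand-ins for the four datasets (the real study works on the installation's actual power, temperature, workload, and event logs).

        # Correlations among synthetic power, temperature, load, and event series,
        # computed at two time scales (per-minute samples and hourly means).
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(1)
        idx = pd.date_range("2014-01-01", periods=24 * 60, freq="min")
        load = pd.Series(rng.random(len(idx)), index=idx)
        power = 100 + 50 * load + rng.normal(0, 2, len(idx))    # power tracks load
        temp = 40 + 0.1 * power + rng.normal(0, 0.5, len(idx))  # temp tracks power
        events = pd.Series(rng.poisson(0.1, len(idx)), index=idx, dtype=float)

        df = pd.DataFrame({"load": load, "power": power,
                           "temp": temp, "events": events})
        print(df.corr())                        # fine-grained scale
        print(df.resample("1h").mean().corr())  # coarser scale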

    Gossip-based self-managing services for large scale dynamic networks

    Modern IP networks are dynamic, large-scale and heterogeneous. This implies that they are more unpredictable and difficult to maintain and build upon. Implementation and management of decentralized applications that exploit these networks can be enabled only through a set of special middleware services that shield the application from the scale, dynamism and heterogeneity of the environment. Among others, these services have to provide communication services (routing, multicasting, etc.) and global information like network size, load distribution, etc. The goal is not to provide abstractions that hide the distributed nature of the system, but rather to hide its unpleasant features, such as dynamism, scale and heterogeneity. Most importantly, these services have to be self-managing: they have to be able to maintain certain properties in the face of extreme dynamism of the network. In this manner, such services can serve as the lowest layer that makes it possible to build more complex applications, or simply as a plugin to enhance existing systems, for example GRID environments. Apart from self-management, we require that the services be simple and lightweight, to allow easy implementation and incur low cost. Our approach to achieving these goals is based on the gossip communication model. Gossip protocols are simple, robust and scalable; besides, they can be applied to implement not only information dissemination but several other functions, as we will show. So far, we have designed gossip-based protocols for maintaining random overlays, which define group membership. Based on this random overlay, we have designed gossip-based protocols to calculate aggregate values such as maxima, average, sum, variance, etc. We have also developed protocols to build several structured overlays in this framework, including superpeer, torus, ring, binary tree, etc. These protocols build on the random overlay and also on aggregate values. The gossip-based model is well suited to dynamic and large networks. Our protocols are extremely simple to implement while being robust and adaptive, without adding any extra components or control loops. Our approach also supports composition at a local level. At each node in the network, the same services are available: for example, data aggregation uses the random overlay (peer sampling service), and superpeer topology construction applies aggregate values such as maximal and average capacity. In fact, the protocols that implement the different services are heavily interconnected and form a modular system within this lightweight self-managing service layer. While this presentation focuses on the self-managing system services, it is clear that other application-level services can also be built at higher layers. These services can be proactive, like load balancing, which can make use of the target (average) load and overlays for optimization of load transfer, or reactive, like broadcasting or search, which can be performed on top of an appropriate overlay network (e.g., a spanning tree or superpeer network) maintained by the lightweight self-managing system services.
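
    One of the services mentioned above, gossip-based aggregation, is simple enough to sketch. Assuming each node can draw a uniformly random peer (the role of the peer sampling service), push-pull averaging drives every node's estimate towards the global average; the synchronous simulation below is a minimal stand-in for the real asynchronous protocol.

        # Gossip-based push-pull averaging: each round, every node exchanges its
        # estimate with one random peer and both adopt the mean. All estimates
        # converge to the global average of the initial values.
        import random

        random.seed(42)
        estimates = [random.uniform(0.0, 100.0) for _ in range(1000)]  # per-node load
        true_avg = sum(estimates) / len(estimates)

        for _ in range(20):
            for i in range(len(estimates)):
                j = random.randrange(len(estimates))  # stand-in for peer sampling
                mean = (estimates[i] + estimates[j]) / 2
                estimates[i] = estimates[j] = mean

        spread = max(estimates) - min(estimates)
        print(f"true average {true_avg:.4f}, all estimates within {spread:.2e}")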

    A Big Data Analyzer for Large Trace Logs

    The current generation of Internet-based services is typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors, including computing hardware, multiple layers of intricate software, networking and storage devices, and electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents BiDAl, a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center. Comment: 26 pages, 10 figures.
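
    BiDAl itself is a Java tool; as a rough illustration of its mix-and-match idea, the Python snippet below pairs one storage backend (SQLite) with one analysis language (SQL) on a tiny made-up event trace. It is not BiDAl's actual API.

        # Stand-in for the mix-and-match idea: SQLite as the storage backend,
        # SQL as the analysis language, over a toy machine-event trace.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE events (ts INTEGER, machine TEXT, type TEXT)")
        conn.executemany(
            "INSERT INTO events VALUES (?, ?, ?)",
            [(1, "m1", "ADD"), (5, "m1", "REMOVE"), (6, "m2", "ADD"), (9, "m1", "ADD")],
        )

        # Analysis step: events per machine, the kind of query one would run
        # over a cluster trace before fitting a data center model.
        for machine, n in conn.execute(
            "SELECT machine, COUNT(*) FROM events GROUP BY machine ORDER BY machine"
        ):
            print(machine, n)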

    Sequence-based global predicates for distributed computations: definitions and detection algorithms

    We consider the problem of detecting sequences of predicates defined over global states of a distributed computation. We introduce two new global predicate classes called simple sequences and interval-constrained sequences that define causally-ordered sets of desirable states along with intervening forbidden states. Our formalism is more general than former proposals and permits concise and intuitive expression of many interesting system properties. Algorithms are given for verifying formulas belonging to these predicate classes in an on-line and observer-independent manner during distributed computations. We illustrate the utility of our results by applying them to examples drawn from program testing, debugging and dynamic reconfiguration in distributed systems.
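
    A much-simplified version of the detection problem can be sketched in a few lines. The check below assumes a totally ordered stream of global states; the paper's algorithms address the harder, observer-independent case over the partial order of a distributed computation.

        # Simplified online check of a "simple sequence": each predicate in
        # `wanted` must hold in order, with no forbidden state observed between
        # consecutive matches.
        def detect_sequence(states, wanted, forbidden=lambda s: False):
            i = 0
            for state in states:
                if i < len(wanted) and wanted[i](state):
                    i += 1                # next desirable state reached
                elif 0 < i < len(wanted) and forbidden(state):
                    return False          # forbidden state between matches
                if i == len(wanted):
                    return True
            return False

        # Example: x must reach 1 and later 3, with no y < 0 state in between.
        trace = [{"x": 0, "y": 5}, {"x": 1, "y": 4},
                 {"x": 2, "y": 2}, {"x": 3, "y": 1}]
        print(detect_sequence(trace,
                              [lambda s: s["x"] == 1, lambda s: s["x"] == 3],
                              forbidden=lambda s: s["y"] < 0))  # -> True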